
    Quantum Semi-Markov Processes

    We construct a large class of non-Markovian master equations that describe the dynamics of open quantum systems featuring strong memory effects; the construction relies on a quantum generalization of the concept of classical semi-Markov processes. General conditions for the complete positivity of the corresponding quantum dynamical maps are formulated. The resulting non-Markovian quantum processes allow the treatment of a variety of physical systems, as is illustrated by means of various examples and applications, including quantum optical systems and models of quantum transport. (Comment: 4 pages, revtex, no figures, to appear in Phys. Rev. Lett.)
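
    A classical semi-Markov process is specified by waiting-time distributions between jumps; the quantum generalization replaces the classical jump by a quantum operation. As a hedged illustration (the kernel k and the map \mathcal{E} below are schematic placeholders, not the paper's exact construction), a memory-kernel master equation of semi-Markov type reads:

        % Schematic memory-kernel master equation of semi-Markov type.
        % \mathcal{E} is a completely positive trace-preserving map (the
        % quantum analogue of the classical jump) and k(\tau) a memory
        % kernel tied to a classical waiting-time density.
        \frac{d}{dt}\rho(t) =
            \int_0^t d\tau\, k(\tau)\,\bigl[\mathcal{E} - \mathbb{1}\bigr]\,\rho(t-\tau)

    Complete positivity of the induced dynamical map then becomes a joint condition on k and \mathcal{E}, which is the kind of criterion the paper formulates.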

    Markov two-component processes

    We propose Markov two-component processes (M2CP) as a probabilistic model of asynchronous systems based on the trace semantics for concurrency. Considering an asynchronous system distributed over two sites, we introduce concepts and tools to manipulate random trajectories in an asynchronous framework: stopping times, an Asynchronous Strong Markov property, recurrent and transient states, and irreducible components of asynchronous probabilistic processes. The asynchrony assumption implies that there is no global totally ordered clock ruling the system. Instead, time appears as partially ordered and random. We construct and characterize M2CP through a finite family of transition matrices. M2CP have a local independence property that guarantees that the local components are independent in the probabilistic sense, conditionally on their synchronization constraints. A synchronization product of two Markov chains is introduced as a natural example of an M2CP. (Comment: 34 pages)
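
    As a toy illustration of the product construction (this sketch only covers the fully independent case; the paper's synchronization product additionally conditions the components on their synchronization constraints), two local chains evolving with no interaction combine via the Kronecker product of their transition matrices:

        import numpy as np

        # Two local Markov chains given by row-stochastic transition matrices.
        P = np.array([[0.9, 0.1],
                      [0.4, 0.6]])
        Q = np.array([[0.5, 0.5],
                      [0.2, 0.8]])

        # With no synchronization constraint the two components move
        # independently, so the joint chain on pairs (i, j) has the
        # Kronecker product as its transition matrix.
        joint = np.kron(P, Q)

        assert np.allclose(joint.sum(axis=1), 1.0)  # product is row-stochastic
        print(joint)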

    Feature Markov Decision Processes

    General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well developed for small finite-state Markov Decision Processes (MDPs). So far, extracting the right state representation out of the bare observations, i.e. reducing the agent setup to the MDP framework, has been an art performed by human designers. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in a companion article. (Comment: 7 pages)
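
    The criterion is coding-theoretic in spirit: a good feature map turns the observation history into a state sequence that is cheap to code together with the rewards. A minimal stand-in sketch (the function names and the BIC-style penalty are illustrative assumptions, not the article's exact criterion):

        import math
        from collections import Counter

        def code_length(seq, ctx):
            """Code length (nats) of `seq` when each symbol is predicted from
            its context in `ctx`, plus a BIC-style penalty per parameter."""
            counts = Counter(zip(ctx, seq))
            totals = Counter(ctx)
            loglik = sum(n * math.log(n / totals[c])
                         for (c, _), n in counts.items())
            return -loglik + 0.5 * len(counts) * math.log(len(seq))

        def phi_cost(states, rewards):
            """Illustrative stand-in for the article's criterion: code the
            state transitions, then the rewards given the states."""
            return (code_length(states[1:], states[:-1])
                    + code_length(rewards, states))

        # Raw observations alternate and the reward equals the observation.
        obs = [0, 1] * 20
        rew = list(obs)
        print(phi_cost(obs, rew))              # identity map: cheap rewards
        print(phi_cost([0] * len(obs), rew))   # constant map: costly rewards

    The identity map wins because the rewards are predictable from its states; the constant map makes the state sequence cheap but the rewards expensive, which is exactly the trade-off such a criterion must arbitrate.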

    Metastability in Markov processes

    We present a formalism to describe slowly decaying systems in the context of finite Markov chains obeying detailed balance. We show that phase space can be partitioned into approximately decoupled regions, in which one may introduce restricted Markov chains that are close to the original process but do not leave these regions. Within this context, we identify the conditions under which the decaying system can be considered to be in a metastable state. Furthermore, we show that such metastable states can be described in thermodynamic terms, and we define their free energy. This is accomplished by showing that the probability distribution describing the metastable state is indeed proportional to the equilibrium distribution, as is commonly assumed. We test the formalism numerically in the case of the two-dimensional kinetic Ising model, using the Wang-Landau algorithm to show this proportionality explicitly, and confirm that the proportionality constant is as derived in the theory. Finally, we extend the formalism to situations in which a system can have several metastable states. (Comment: 30 pages, 5 figures; version with one higher quality figure available at http://www.fis.unam.mx/~dsanders)
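
    A minimal numerical sketch of the central claim, using a one-dimensional double well in place of the Ising model (all parameters below are illustrative): restrict a detailed-balance Metropolis chain to one well and check that its stationary distribution is the equilibrium distribution renormalized on that region.

        import numpy as np

        # Double-well energy on a 1-D lattice; nearest-neighbour Metropolis
        # dynamics obeys detailed balance with respect to exp(-E/T).
        x = np.linspace(-2, 2, 81)
        E = (x**2 - 1.0)**2
        T = 0.1

        def metropolis_matrix(E, T):
            """Transition matrix of a nearest-neighbour Metropolis chain;
            moves off the ends of the lattice are simply unavailable."""
            n = len(E)
            P = np.zeros((n, n))
            for i in range(n):
                for j in (i - 1, i + 1):
                    if 0 <= j < n:
                        P[i, j] = 0.5 * min(1.0, np.exp(-(E[j] - E[i]) / T))
                P[i, i] = 1.0 - P[i].sum()
            return P

        # Restricted chain: keep only the left well, so exits never occur.
        left = x < 0
        P_res = metropolis_matrix(E[left], T)

        # Its stationary distribution (leading left eigenvector) matches the
        # equilibrium distribution renormalized on the region, as claimed.
        w, v = np.linalg.eig(P_res.T)
        pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
        pi /= pi.sum()
        boltz = np.exp(-E[left] / T)
        boltz /= boltz.sum()
        print(np.abs(pi - boltz).max())        # close to zero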

    Towards a General Theory of Stochastic Hybrid Systems

    In this paper we set up a mathematical structure, called a Markov string, to obtain a very general class of models for stochastic hybrid systems. Markov strings are, in fact, a class of Markov processes obtained by a mixing mechanism for stochastic processes introduced by Meyer. We prove that Markov strings are strong Markov processes with the càdlàg property. We then show how a very general class of stochastic hybrid processes can be embedded in the framework of Markov strings. This class, referred to as General Stochastic Hybrid Systems (GSHS), includes as special cases all the classes of stochastic hybrid processes proposed in the literature.
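
    A hedged sketch of the kind of trajectory a stochastic hybrid system produces (the flows, rates, and reset kernel below are invented for illustration and are not the paper's construction): deterministic flow within a discrete mode, an exponentially distributed waiting time, and a random reset at each mode switch.

        import random

        # Toy stochastic hybrid trajectory: within each discrete mode the
        # continuous state follows a deterministic flow; after an exponential
        # waiting time the mode switches and the state is randomly reset.
        flows = {0: lambda x: -x,         # mode 0: relax toward 0
                 1: lambda x: 1.0 - x}    # mode 1: relax toward 1
        rates = {0: 0.5, 1: 2.0}          # mode-dependent jump intensities

        def simulate(t_end, dt=0.01, mode=0, x=0.0):
            t, path = 0.0, []
            next_jump = random.expovariate(rates[mode])
            while t < t_end:
                if t >= next_jump:                  # discrete transition
                    mode = 1 - mode
                    x += random.gauss(0.0, 0.1)     # random reset of x
                    next_jump = t + random.expovariate(rates[mode])
                x += flows[mode](x) * dt            # Euler step of the flow
                path.append((t, mode, x))
                t += dt
            return path

        print(simulate(5.0)[-1])                    # final (time, mode, state)

    The resulting sample paths are piecewise continuous with jumps at the switching times, which is why the càdlàg and strong Markov properties proved in the paper are the natural regularity statements.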

    Multiple-Environment Markov Decision Processes

    We introduce Multiple-Environment Markov Decision Processes (MEMDPs), which are MDPs with a set of probabilistic transition functions. The goal in an MEMDP is to synthesize a single controller with guaranteed performance against all environments, even though the environment is unknown a priori. While MEMDPs can be seen as a special class of partially observable MDPs, we show that several verification problems that are undecidable for partially observable MDPs are decidable for MEMDPs, and sometimes even have efficient solutions.
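
    A small data-structure sketch of the setting (illustrative numbers; the paper is about synthesis and decidability, not this particular evaluation): one MDP skeleton, several transition functions, and a single fixed policy scored by its worst-case value across the environments.

        import numpy as np

        # Toy MEMDP: same states/actions, two candidate transition functions
        # (environments). A single controller must perform well whichever
        # environment is real, so a policy is scored by its worst-case value.
        n_states, horizon = 2, 30
        # T[e][a] is the transition matrix of environment e under action a.
        T = [
            [np.array([[0.9, 0.1], [0.1, 0.9]]),   # env 0, action 0
             np.array([[0.5, 0.5], [0.5, 0.5]])],  # env 0, action 1
            [np.array([[0.1, 0.9], [0.9, 0.1]]),   # env 1, action 0
             np.array([[0.5, 0.5], [0.5, 0.5]])],  # env 1, action 1
        ]
        R = np.array([0.0, 1.0])                   # reward of each state

        def value(env, policy):
            """Finite-horizon value of `policy` from state 0 in `env`."""
            v = np.zeros(n_states)
            for _ in range(horizon):
                v = np.array([R[s] + T[env][policy[s]][s] @ v
                              for s in range(n_states)])
            return v[0]

        def worst_case(policy):
            return min(value(e, policy) for e in range(len(T)))

        print(worst_case([0, 0]), worst_case([1, 1]))  # risky vs. hedging

    Here the environment-sensitive policy [0, 0] does well in one environment and poorly in the other, while the hedging policy [1, 1] achieves the better guaranteed value, which is the quantity an MEMDP controller must optimize.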

    Markov cubature rules for polynomial processes

    We study discretizations of polynomial processes using finite-state Markov processes satisfying suitable moment-matching conditions. The states of these Markov processes, together with their transition probabilities, can be interpreted as Markov cubature rules. The polynomial property allows us to study such rules using algebraic techniques. Markov cubature rules aid the tractability of path-dependent tasks such as American option pricing in models where the underlying factors are polynomial processes. (Comment: 29 pages, 6 figures, 2 tables; forthcoming in Stochastic Processes and their Applications)
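
    A minimal sketch of the moment-matching idea on an Ornstein-Uhlenbeck process, which is a simple polynomial process (the three-point grid and parameters are illustrative; the paper's cubature rules are far more general): each transition row is solved so that the chain reproduces the exact conditional mean and variance over one time step.

        import numpy as np

        # Moment-matched discretization of an Ornstein-Uhlenbeck process on a
        # three-point grid: each row of the transition matrix reproduces the
        # exact conditional mean and variance over one step of length dt.
        theta, sigma, dt = 1.0, 1.0, 0.2
        grid = np.array([-1.0, 0.0, 1.0])

        mean = grid * np.exp(-theta * dt)              # E[X_{t+dt} | X_t]
        var = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)

        # For each current state, solve for probabilities matching the
        # moments of order 0, 1, and 2.
        V = np.vstack([grid**0, grid, grid**2])        # Vandermonde rows
        P = np.array([np.linalg.solve(V, [1.0, m, m**2 + var])
                      for m in mean])

        assert (P >= -1e-12).all() and np.allclose(P.sum(axis=1), 1.0)
        print(P)

    With these parameters the matched probabilities happen to be nonnegative, so P is a valid transition matrix; in general, choosing states and weights so that positivity holds is exactly where the cubature viewpoint earns its keep.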